This paper presents a new multimodal interventional radiology dataset called the PoCaP (Port Catheter Placement) Corpus. The corpus consists of speech and audio signals in German, X-ray images, and system commands collected from 31 PoCaP interventions performed by six surgeons, with an average duration of 81.4 $\pm$ 41.0 minutes. The corpus is intended as a resource for developing smart speech assistants in the operating room. In particular, it can be used to develop speech-controlled systems that enable surgeons to control operation parameters such as C-arm movements and table positions. To record the dataset, we obtained consent from the institutional review board and workers council of the University Hospital Erlangen, and from the patients for data privacy. We describe the recording setup, data structure, workflow, and preprocessing steps, and report the first PoCaP Corpus speech recognition results, achieving an 11.52% word error rate with pretrained models. The findings suggest that the data have the potential to support a robust command recognition system and will enable the development of novel intervention support systems using speech and image processing in the medical domain.
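For reference, the reported word error rate is the standard metric: the Levenshtein distance between reference and hypothesis word sequences, normalized by the reference length. A minimal, self-contained sketch (not the authors' evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # sub/del/ins
    return d[len(ref)][len(hyp)] / len(ref)

# e.g. a hypothetical OR command with one substituted word out of four
print(word_error_rate("move c arm left", "move c arm right"))  # 0.25
```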
Speech intelligibility assessment plays an important role in the therapy of patients suffering from pathological speech disorders. Automatic and objective measures are needed to assist therapists in their traditionally subjective and labor-intensive assessments. In this work, we investigate a novel approach that derives such a measure from the divergence between disentangled latent speech representations of parallel utterance pairs obtained from a healthy reference speaker and a pathological speaker. Experiments on an English database, using all available utterances per speaker, show a high and significant correlation (r = -0.9) with subjective intelligibility measures, with only minimal deviation ($\pm$0.01) across four different reference speaker pairs. We also demonstrate robustness to the amount of speech considered: when using far fewer utterances per speaker, the deviation remains within $\pm$0.02 across 1000 iterations. Our results are among the first to show that disentangled speech representations can be used for automatic pathological speech intelligibility assessment, yielding a method that is invariant to the reference speaker pair and applicable in scenarios where only a few utterances are available.
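The core measurement can be illustrated as follows: compute a divergence between the paired latent vectors of the reference and pathological speaker, average it per speaker, and correlate with the subjective scores. A minimal numpy sketch, where cosine distance is an assumed choice of divergence and all data are hypothetical:

```python
import numpy as np

def latent_divergence(ref_latents: np.ndarray, path_latents: np.ndarray) -> float:
    """Mean divergence between parallel latent pairs, shape (n_utts, dim) each.
    Cosine distance here is an illustrative assumption, not the paper's choice."""
    num = np.sum(ref_latents * path_latents, axis=1)
    den = np.linalg.norm(ref_latents, axis=1) * np.linalg.norm(path_latents, axis=1)
    return float(np.mean(1.0 - num / den))

# Correlate per-speaker divergence with subjective intelligibility ratings.
divergences = np.array([0.12, 0.35, 0.28, 0.51])  # hypothetical, one per speaker
subjective = np.array([4.6, 2.1, 3.0, 1.2])       # hypothetical ratings
r = np.corrcoef(divergences, subjective)[0, 1]    # negative r: more divergence,
print(r)                                          # lower intelligibility
```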
Frozen sectioning (FS) is the preparation method of choice for microscopic evaluation of tissue during surgical operations. The high speed of the procedure allows pathologists to rapidly assess key microscopic features, such as tumor margins and malignancy status, to guide surgical decision-making and minimize disruption to the course of the operation. However, FS is prone to introducing many misleading artificial structures (histological artifacts), such as nuclear ice crystals, compression, and cutting artifacts, which hinder timely and accurate diagnostic judgment by the pathologist. Additional training and prolonged experience are often required for highly efficient and time-critical diagnosis on frozen sections. On the other hand, the gold-standard tissue preparation technique of formalin fixation and paraffin embedding (FFPE) provides significantly superior image quality, but is a very time-consuming process (12-48 hours), making it unsuitable for intraoperative use. In this paper, we propose an artificial intelligence (AI) method, AI-FFPE, that improves FS image quality by computationally transforming frozen-sectioned whole-slide images (FS-WSIs) into whole-slide FFPE-style images within minutes. AI-FFPE rectifies FS artifacts under the guidance of an attention mechanism, while a self-regularization mechanism established between the FS input image and the synthesized FFPE-style image preserves clinically relevant features. As a result, AI-FFPE successfully generates FFPE-style images without significantly extending tissue processing time, thereby improving diagnostic accuracy. We demonstrate the efficacy of AI-FFPE using a variety of qualitative and quantitative metrics, including visual Turing tests with 20 board-certified pathologists.
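As a rough illustration of the self-regularization idea, one can penalize deviation between the FS input and the synthesized FFPE-style output, modulated by an attention map so that artifact regions are free to change while the remaining tissue structure is preserved. This is an illustrative guess at the mechanism, not the paper's actual loss:

```python
import numpy as np

def self_regularization_loss(fs_image: np.ndarray,
                             ffpe_style: np.ndarray,
                             attention: np.ndarray) -> float:
    """L1 consistency between input and output, down-weighted where the
    attention map flags artifacts (attention in [0, 1], 1 = artifact).
    A hedged sketch of the idea, not AI-FFPE's published objective."""
    keep = 1.0 - attention                 # preserve non-artifact regions
    return float(np.mean(keep * np.abs(ffpe_style - fs_image)))
```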
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
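To make the 2:4 semi-structured pattern concrete: in every group of 4 consecutive weights, only 2 may be nonzero. The sketch below enforces that pattern with plain magnitude pruning for illustration; SparseGPT itself selects and reconstructs the surviving weights using second-order (Hessian-based) information, which is not reproduced here:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Enforce 2:4 sparsity: zero the 2 smallest-magnitude entries in every
    group of 4 consecutive weights along the flattened last axis."""
    w = weights.reshape(-1, 4)
    drop = np.argsort(np.abs(w), axis=1)[:, :2]   # 2 smallest |w| per group
    mask = np.ones_like(w)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (w * mask).reshape(weights.shape)

w = np.random.randn(2, 8)
print(prune_2_4(w))  # exactly 50% zeros, 2 survivors per group of 4
```

Hardware such as NVIDIA's sparse tensor cores can accelerate this pattern at inference, which is why it is singled out alongside unstructured sparsity.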
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
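The output-score aspect (iii) reduces to fusing two similarity matrices. A minimal sketch with hypothetical similarities and a hypothetical mixing weight `alpha` (not a value from the paper):

```python
import numpy as np

def fused_retrieval_scores(query_video_sim: np.ndarray,
                           query_caption_sim: np.ndarray,
                           alpha: float = 0.5) -> np.ndarray:
    """Complement Query-Video matching with Query-Caption matching.
    Shapes: (n_queries, n_videos); alpha is an assumed hyperparameter."""
    return alpha * query_video_sim + (1.0 - alpha) * query_caption_sim

qv = np.random.rand(4, 10)   # hypothetical query-video cosine similarities
qc = np.random.rand(4, 10)   # hypothetical query-caption similarities
ranks = np.argsort(-fused_retrieval_scores(qv, qc), axis=1)  # best video first
```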
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational-GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneer study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
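For reference, the SDR figures quoted above follow the standard definition, the ratio of signal energy to distortion energy in decibels (evaluation protocols may differ in detail):

```python
import numpy as np

def sdr_db(clean: np.ndarray, restored: np.ndarray) -> float:
    """Signal-to-distortion ratio: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = clean - restored
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2)))

t = np.linspace(0, 1, 16000)
s = np.sin(2 * np.pi * 440 * t)                        # hypothetical clean signal
print(sdr_db(s, s + 0.05 * np.random.randn(t.size)))   # higher is better
```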
We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model in some latent state space is learned by predicting quantities directly related to planning (e.g., costs) without reconstructing the observations. In particular, we focus on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems. As our main results, we establish finite-sample guarantees of finding a near-optimal state representation function and a near-optimal controller using the directly learned latent model. To the best of our knowledge, despite various empirical successes, prior to this work it was unclear if such a cost-driven latent model learner enjoys finite-sample guarantees. Our work underscores the value of predicting multi-step costs, an idea that is key to our theory, and notably also an idea that is known to be empirically valuable for learning state representations.
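The cost-driven idea can be sketched as follows: roll a latent model forward under the actions and penalize multi-step cost prediction error, with no observation decoder in the loop. All names, shapes, and the linear cost model below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def multistep_cost_loss(z0, actions, costs, A, B, w):
    """Latent dynamics z' = A z + B u with a hypothetical linear cost model
    c_t ~ w . z_t; the loss sums squared multi-step cost prediction errors."""
    z, loss = z0, 0.0
    for u, c in zip(actions, costs):
        loss += (w @ z - c) ** 2     # predicted vs. observed cost at step t
        z = A @ z + B @ u            # roll latent model, no reconstruction
    return loss / len(costs)

rng = np.random.default_rng(0)
d, m, T = 4, 2, 10
A, B, w = 0.3 * rng.normal(size=(d, d)), rng.normal(size=(d, m)), rng.normal(size=d)
print(multistep_cost_loss(rng.normal(size=d), rng.normal(size=(T, m)),
                          rng.normal(size=T), A, B, w))  # hypothetical data
```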
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
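To fix terminology, the three components look as follows in a didactic FedAvg-style round on a least-squares objective; this is a sketch of CS, DS, and LT only, not any of the cited 5th-generation methods:

```python
import numpy as np

def fedavg_round(global_w, client_data, n_sampled, local_steps, lr, rng):
    """One round: sample clients (CS), run several local stochastic gradient
    steps per client (DS + LT), then average the resulting models."""
    chosen = rng.choice(len(client_data), size=n_sampled, replace=False)  # CS
    updates = []
    for c in chosen:
        X, y = client_data[c]
        w = global_w.copy()
        for _ in range(local_steps):                                      # LT
            i = rng.integers(len(y))                                      # DS
            w -= lr * (X[i] @ w - y[i]) * X[i]    # stochastic gradient step
        updates.append(w)
    return np.mean(updates, axis=0)  # server averages the sampled clients

rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(10)]
w = np.zeros(3)
for _ in range(5):
    w = fedavg_round(w, clients, n_sampled=4, local_steps=8, lr=0.05, rng=rng)
```

The open problem resolved above is precisely whether the `chosen` subset (CS) can coexist with `local_steps > 1` (LT) while retaining the accelerated communication guarantees.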
With the proliferation of deep generative models, deepfakes are improving in quality and quantity everyday. However, there are subtle authenticity signals in pristine videos, not replicated by SOTA GANs. We contrast the movement in deepfakes and authentic videos by motion magnification towards building a generalized deepfake source detector. The sub-muscular motion in faces has different interpretations per different generative models which is reflected in their generative residue. Our approach exploits the difference between real motion and the amplified GAN fingerprints, by combining deep and traditional motion magnification, to detect whether a video is fake and its source generator if so. Evaluating our approach on two multi-source datasets, we obtain 97.17% and 94.03% for video source detection. We compare against the prior deepfake source detector and other complex architectures. We also analyze the importance of magnification amount, phase extraction window, backbone network architecture, sample counts, and sample lengths. Finally, we report our results for different skin tones to assess the bias.
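As background for the traditional side of the pipeline, linear Eulerian motion magnification amplifies a temporal band-pass of each pixel's intensity. A toy sketch using the difference of two exponential moving averages as the band-pass; the paper's method additionally combines deep magnification and operates on GAN residue, which is not reproduced here:

```python
import numpy as np

def magnify_motion(frames: np.ndarray, alpha: float = 10.0,
                   fast: float = 0.4, slow: float = 0.05) -> np.ndarray:
    """Toy linear Eulerian magnification: band-pass each pixel temporally
    (difference of two low-passes), then add the amplified band back.
    frames: (T, H, W) or (T, H, W, C) array."""
    lo_fast = np.zeros_like(frames[0], dtype=float)
    lo_slow = np.zeros_like(frames[0], dtype=float)
    out = np.empty(frames.shape, dtype=float)
    for t, f in enumerate(frames.astype(float)):
        lo_fast += fast * (f - lo_fast)      # faster low-pass
        lo_slow += slow * (f - lo_slow)      # slower low-pass
        out[t] = f + alpha * (lo_fast - lo_slow)  # amplify subtle motion
    return out
```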
In this paper, we study the \underline{R}obust \underline{o}ptimization for \underline{se}quence \underline{Net}worked \underline{s}ubmodular maximization (RoseNets) problem. We interweave the robust optimization with the sequence networked submodular maximization. The elements are connected by a directed acyclic graph and the objective function is not submodular on the elements but on the edges in the graph. Under such networked submodular scenario, the impact of removing an element from a sequence depends both on its position in the sequence and in the network. This makes the existing robust algorithms inapplicable. In this paper, we take the first step to study the RoseNets problem. We design a robust greedy algorithm, which is robust against the removal of an arbitrary subset of the selected elements. The approximation ratio of the algorithm depends both on the number of the removed elements and the network topology. We further conduct experiments on real applications of recommendation and link prediction. The experimental results demonstrate the effectiveness of the proposed algorithm.
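To see why marginal gains depend on both position and topology, note that appending an element only activates edges from already-selected predecessors in the DAG. The skeleton below shows that dependence for a plain greedy pass; it omits the paper's robust construction against adversarial removals:

```python
def greedy_sequence(elements, edges, edge_value, k):
    """Greedy sequence selection when value lives on DAG edges: appending e
    activates edges (u, e) for every previously selected u. Illustrative
    skeleton only, without the RoseNets robustness mechanism."""
    seq, value = [], 0.0
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in elements:
            if e in seq:
                continue
            # marginal gain = newly activated edges from selected predecessors
            gain = sum(edge_value[(u, e)] for u in seq if (u, e) in edges)
            if gain > best_gain:
                best, best_gain = e, gain
        seq.append(best)
        value += best_gain
    return seq, value

edges = {("a", "b"), ("a", "c"), ("b", "c")}
vals = {("a", "b"): 1.0, ("a", "c"): 0.5, ("b", "c"): 2.0}
print(greedy_sequence(["a", "b", "c"], edges, vals, k=3))  # (['a','b','c'], 3.5)
```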